Results 1 - 20 of 39
1.
Lancet Digit Health ; 6(2): e114-e125, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38135556

ABSTRACT

BACKGROUND: The rising global cancer burden has led to an increasing demand for imaging tests such as [18F]fluorodeoxyglucose ([18F]FDG)-PET-CT. To aid imaging specialists in dealing with high scan volumes, we aimed to train a deep learning artificial intelligence algorithm to classify [18F]FDG-PET-CT scans of patients with lymphoma with or without hypermetabolic tumour sites. METHODS: In this retrospective analysis we collected 16 583 [18F]FDG-PET-CTs of 5072 patients with lymphoma who had undergone PET-CT before or after treatment at the Memorial Sloan Kettering Cancer Center, New York, NY, USA. Using maximum intensity projection (MIP), three-dimensional (3D) PET, and 3D CT data, our ResNet34-based deep learning model (Lymphoma Artificial Reader System [LARS]) for [18F]FDG-PET-CT binary classification (Deauville 1-3 vs 4-5) was trained on 80% of the dataset and tested on the remaining 20%. For external testing, 1000 [18F]FDG-PET-CTs were obtained from a second centre (Medical University of Vienna, Vienna, Austria). Seven model variants were evaluated, including MIP-based LARS-avg (optimised for accuracy) and LARS-max (optimised for sensitivity), and 3D PET-CT-based LARS-ptct. Following expert curation, areas under the curve (AUCs), accuracies, sensitivities, and specificities were calculated. FINDINGS: In the internal test cohort (3325 PET-CTs, 1012 patients), LARS-avg achieved an AUC of 0·949 (95% CI 0·942-0·956), accuracy of 0·890 (0·879-0·901), sensitivity of 0·868 (0·851-0·885), and specificity of 0·913 (0·899-0·925); LARS-max achieved an AUC of 0·949 (0·942-0·956), accuracy of 0·868 (0·858-0·879), sensitivity of 0·909 (0·896-0·924), and specificity of 0·826 (0·808-0·843); and LARS-ptct achieved an AUC of 0·939 (0·930-0·948), accuracy of 0·875 (0·864-0·887), sensitivity of 0·836 (0·817-0·855), and specificity of 0·915 (0·901-0·927). In the external test cohort (1000 PET-CTs, 503 patients), LARS-avg achieved an AUC of 0·953 (0·938-0·966), accuracy of 0·907 (0·888-0·925), sensitivity of 0·874 (0·843-0·904), and specificity of 0·949 (0·921-0·960); LARS-max achieved an AUC of 0·952 (0·937-0·965), accuracy of 0·898 (0·878-0·916), sensitivity of 0·899 (0·871-0·926), and specificity of 0·897 (0·871-0·922); and LARS-ptct achieved an AUC of 0·932 (0·915-0·948), accuracy of 0·870 (0·850-0·891), sensitivity of 0·827 (0·793-0·863), and specificity of 0·913 (0·889-0·937). INTERPRETATION: Deep learning accurately distinguishes between [18F]FDG-PET-CT scans of lymphoma patients with and without hypermetabolic tumour sites. Deep learning might therefore be useful to rule out the presence of metabolically active disease in such patients, or serve as a second reader or decision support tool. FUNDING: National Institutes of Health-National Cancer Institute Cancer Center Support Grant.
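A minimal sketch of the kind of model the METHODS describe: a ResNet34 backbone repurposed for binary classification of PET maximum intensity projections. The single-channel input adaptation, image size, and class mapping are assumptions for illustration, not the published LARS implementation.

```python
# Hypothetical ResNet34-based binary classifier for PET MIP images
# (Deauville 1-3 vs 4-5), sketched under stated assumptions.
import torch
import torch.nn as nn
from torchvision import models

class MIPClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet34(weights=None)  # randomly initialised backbone
        # PET MIPs are single-channel; adapt the first convolution accordingly.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # Deauville 1-3 vs 4-5
        self.backbone = backbone

    def forward(self, x):  # x: (batch, 1, H, W) MIP tensor
        return self.backbone(x)

model = MIPClassifier()
logits = model(torch.randn(4, 1, 256, 256))
probs = torch.softmax(logits, dim=1)[:, 1]  # probability of Deauville 4-5
```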


Subject(s)
Deep Learning, Lymphoma, United States, Humans, Positron Emission Tomography Computed Tomography/methods, Fluorodeoxyglucose F18, Retrospective Studies, Artificial Intelligence, Radiopharmaceuticals, Lymphoma/diagnostic imaging
2.
JMIR Res Protoc ; 12: e49204, 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37971801

ABSTRACT

BACKGROUND: The increasing use of smartphones, wearables, and connected devices has enabled the growing application of digital technologies in research. Remote digital study platforms comprise a patient-interfacing digital application that enables multimodal data collection from a mobile app and connected sources. They offer an opportunity to recruit at scale, acquire data longitudinally at high frequency, and engage study participants at any time of day and in any place. Few published descriptions of centralized digital research platforms provide a framework for their development. OBJECTIVE: This study aims to serve as a road map for those seeking to develop a centralized digital research platform. We describe the technical and functional aspects of the ehive app, the centralized digital research platform of the Hasso Plattner Institute for Digital Health at Mount Sinai Hospital, New York, New York. We then provide information about ongoing studies hosted on ehive, including usership statistics and data infrastructure. Finally, we discuss our experience with ehive in the broader context of the current landscape of digital health research platforms. METHODS: The ehive app is a multifaceted, patient-facing central digital research platform that permits the collection of e-consent for digital health studies. An overview of its development, its e-consent process, and the tools it uses for participant recruitment and retention is provided. Data integration with the platform and the infrastructure supporting its operations are discussed; furthermore, a description of its participant- and researcher-facing dashboard interfaces and the e-consent architecture is provided. RESULTS: The ehive platform was launched in 2020 and has successfully hosted 8 studies, namely 6 observational studies and 2 clinical trials. In total, 1484 participants downloaded the app across 36 states in the United States. Recruitment methods such as bulk messaging through the EPIC electronic health record and standard email portals enable broad recruitment. Light-touch engagement methods, used in an automated fashion through the platform, maintain high degrees of engagement and retention. The ehive platform demonstrates the successful deployment of a central digital research platform that can be modified across study designs. CONCLUSIONS: Centralized digital research platforms such as ehive provide a novel tool that allows investigators to expand their research beyond their institution, engage in large-scale longitudinal studies, and combine multimodal data streams. The ehive platform serves as a model for groups seeking to develop similar digital health research programs. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/49204.

3.
Neurobiol Aging ; 130: 80-83, 2023 10.
Article in English | MEDLINE | ID: mdl-37473581

ABSTRACT

Amyotrophic lateral sclerosis (ALS) is a devastating neuromuscular disease with limited therapeutic options. Biomarkers are needed for early disease detection, clinical trial design, and personalized medicine. Early evidence suggests that specific morphometric features in ALS primary skin fibroblasts may be used as biomarkers; however, this hypothesis has not been rigorously tested in sufficiently large fibroblast cohorts. Here, we imaged ALS-relevant organelles (mitochondria, endoplasmic reticulum, lysosomes) and proteins (TAR DNA-binding protein 43, Ras GTPase-activating protein-binding protein 1, heat-shock protein 60) at baseline and under stress perturbations, and tested their predictive power on a total of 443 human fibroblast lines from ALS and healthy individuals. Machine learning approaches were able to confidently predict stress perturbation states (ROC-AUC ∼0.99) but not disease groups or clinical features (ROC-AUC 0.58-0.64). Our findings indicate that multivariate models using patient-derived fibroblast morphometry can accurately predict different stressors but are insufficient to develop viable ALS biomarkers.
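For illustration, a hedged sketch of the multivariate-modelling step described above: a classifier trained on per-line morphometric features and scored by cross-validated ROC-AUC. The feature matrix and labels below are random placeholders, not the study's data.

```python
# Minimal sketch (not the authors' pipeline) of predicting a binary label
# from fibroblast morphometric features with cross-validated ROC-AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(443, 50))    # 443 cell lines x 50 morphometric features (placeholder)
y = rng.integers(0, 2, size=443)  # e.g. ALS vs healthy, or stressed vs baseline

clf = RandomForestClassifier(n_estimators=300, random_state=0)
aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"ROC-AUC: {aucs.mean():.2f} ± {aucs.std():.2f}")
```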


Subject(s)
Amyotrophic Lateral Sclerosis, Humans, Amyotrophic Lateral Sclerosis/diagnosis, Amyotrophic Lateral Sclerosis/metabolism, Biomarkers, Endoplasmic Reticulum/metabolism, Machine Learning, Fibroblasts/metabolism
4.
Arch Pathol Lab Med ; 147(10): 1178-1185, 2023 10 01.
Article in English | MEDLINE | ID: mdl-36538386

ABSTRACT

CONTEXT: Prostate cancer diagnosis rests on accurate assessment of tissue by a pathologist. The application of artificial intelligence (AI) to digitized whole slide images (WSIs) can aid pathologists in cancer diagnosis, but robust, diverse evidence in a simulated clinical setting is lacking. OBJECTIVE: To compare the diagnostic accuracy of pathologists reading WSIs of prostatic biopsy specimens with and without AI assistance. DESIGN: Eighteen pathologists, 2 of whom were genitourinary subspecialists, evaluated 610 prostate needle core biopsy WSIs prepared at 218 institutions, with the option for deferral. Two evaluations were performed sequentially for each WSI: initially without assistance, and immediately thereafter aided by Paige Prostate (PaPr), a deep learning-based system that provides a WSI-level binary classification of suspicious for cancer or benign and pinpoints the location that has the greatest probability of harboring cancer on suspicious WSIs. Pathologists' changes in sensitivity and specificity between the assisted and unassisted modalities were assessed, together with the impact of PaPr output on the assisted reads. RESULTS: Using PaPr, pathologists improved their sensitivity and specificity across all histologic grades and tumor sizes. Accuracy gains on both benign and cancerous WSIs could be attributed to PaPr, which correctly classified 100% of the WSIs showing corrected diagnoses in the PaPr-assisted phase. CONCLUSIONS: This study demonstrates the effectiveness and safety of an AI tool for pathologists in simulated diagnostic practice, bridges the gap between computational pathology research and its clinical application, and led to the first US Food and Drug Administration authorization of an AI system in pathology.
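A small illustrative calculation of the study's primary comparison, the change in sensitivity and specificity between unassisted and AI-assisted reads; the reads shown are placeholder values, not study data.

```python
# Illustrative comparison of unassisted vs AI-assisted reads against ground truth.
import numpy as np

def sens_spec(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

ground_truth = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = cancer, 0 = benign (placeholders)
unassisted   = [1, 0, 0, 0, 1, 1, 1, 0]
ai_assisted  = [1, 1, 0, 0, 1, 0, 1, 0]

for name, reads in [("unassisted", unassisted), ("AI-assisted", ai_assisted)]:
    sens, spec = sens_spec(ground_truth, reads)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```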


Subject(s)
Artificial Intelligence, Prostatic Neoplasms, Male, Humans, Prostate/pathology, Image Interpretation, Computer-Assisted/methods, Prostatic Neoplasms/diagnosis, Prostatic Neoplasms/pathology, Biopsy, Needle
5.
Am J Pathol ; 193(3): 341-349, 2023 03.
Article in English | MEDLINE | ID: mdl-36563747

ABSTRACT

Osteosarcoma is the most common primary bone cancer, whose standard treatment includes pre-operative chemotherapy followed by resection. Chemotherapy response is used for prognosis and management of patients. Necrosis is routinely assessed after chemotherapy from histology slides on resection specimens, where necrosis ratio is defined as the ratio of necrotic tumor/overall tumor. Patients with necrosis ratio ≥90% are known to have a better outcome. Manual microscopic review of necrosis ratio from multiple glass slides is semiquantitative and can have intraobserver and interobserver variability. In this study, an objective and reproducible deep learning-based approach was proposed to estimate necrosis ratio with outcome prediction from scanned hematoxylin and eosin whole slide images (WSIs). To conduct the study, 103 osteosarcoma cases with 3134 WSIs were collected. Deep Multi-Magnification Network was trained to segment multiple tissue subtypes, including viable tumor and necrotic tumor at a pixel level and to calculate case-level necrosis ratio from multiple WSIs. Necrosis ratio estimated by the segmentation model highly correlates with necrosis ratio from pathology reports manually assessed by experts. Furthermore, patients were successfully stratified to predict overall survival with P = 2.4 × 10⁻⁶ and progression-free survival with P = 0.016. This study indicates that deep learning can support pathologists as an objective tool to analyze osteosarcoma from histology for assessing treatment response and predicting patient outcome.
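A hedged sketch of the case-level necrosis ratio defined above, pooling necrotic and viable tumor pixels over all WSIs of a case; the class indices of the segmentation masks are assumptions made for illustration.

```python
# Case-level necrosis ratio from per-WSI segmentation masks (assumed class indices).
import numpy as np

VIABLE_TUMOR, NECROTIC_TUMOR = 1, 2  # assumed labels in the segmentation output

def case_necrosis_ratio(segmentation_masks):
    """segmentation_masks: iterable of 2D label arrays, one per WSI of the case."""
    viable = necrotic = 0
    for mask in segmentation_masks:
        viable += np.sum(mask == VIABLE_TUMOR)
        necrotic += np.sum(mask == NECROTIC_TUMOR)
    total_tumor = viable + necrotic
    return necrotic / total_tumor if total_tumor else float("nan")

ratio = case_necrosis_ratio([np.random.randint(0, 3, (512, 512)) for _ in range(4)])
good_responder = ratio >= 0.90  # >=90% necrosis is associated with better outcome
```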


Subject(s)
Bone Neoplasms, Deep Learning, Osteosarcoma, Humans, Bone Neoplasms/drug therapy, Bone Neoplasms/pathology, Prognosis, Necrosis/pathology, Osteosarcoma/drug therapy, Osteosarcoma/pathology
6.
J Pathol Inform ; 14: 100160, 2023.
Article in English | MEDLINE | ID: mdl-36536772

ABSTRACT

Deep learning has been widely used to analyze digitized hematoxylin and eosin (H&E)-stained histopathology whole slide images. Automated cancer segmentation using deep learning can be used to diagnose malignancy and to find novel morphological patterns to predict molecular subtypes. To train pixel-wise cancer segmentation models, manual annotation from pathologists is generally a bottleneck due to its time-consuming nature. In this paper, we propose Deep Interactive Learning with a pretrained segmentation model from a different cancer type to reduce manual annotation time. Instead of annotating all pixels of cancer and non-cancer regions on giga-pixel whole slide images, an iterative process of annotating only the regions mislabeled by a segmentation model and training/finetuning the model with the additional annotations reduces annotation time. In particular, starting from a pretrained segmentation model reduces the time further compared with annotating from scratch. Using a pretrained breast cancer segmentation model and 3.5 hours of manual annotation, we trained an accurate ovarian cancer segmentation model that achieved an intersection-over-union of 0.74, a recall of 0.86, and a precision of 0.84. With automatically extracted high-grade serous ovarian cancer patches, we attempted to train an additional deep learning classification model to predict BRCA mutation. The segmentation model and code have been released at https://github.com/MSKCC-Computational-Pathology/DMMN-ovary.
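A conceptual sketch of the Deep Interactive Learning loop described above, assuming a PyTorch segmentation model; `annotate_errors` stands in for the pathologist's correction step and is purely hypothetical, not the released code.

```python
# Conceptual Deep Interactive Learning loop: predict, let a pathologist correct
# mislabeled regions, finetune on the corrections, and repeat.
import torch

def deep_interactive_learning(model, wsi_patches, annotate_errors, iterations=3):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(iterations):
        model.eval()
        with torch.no_grad():
            predictions = [model(p.unsqueeze(0)) for p in wsi_patches]  # per-pixel logits
        # Pathologist reviews predictions and annotates only the mislabeled regions;
        # annotate_errors is a placeholder returning (patch, mask) pairs, with mask a
        # long tensor of class indices.
        corrected = annotate_errors(wsi_patches, predictions)
        model.train()
        for patch, mask in corrected:
            optimizer.zero_grad()
            loss = loss_fn(model(patch.unsqueeze(0)), mask.unsqueeze(0))
            loss.backward()
            optimizer.step()
    return model
```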

7.
Acta Neuropathol Commun ; 10(1): 131, 2022 09 21.
Article in English | MEDLINE | ID: mdl-36127723

ABSTRACT

Age-related cognitive impairment is multifactorial, with numerous underlying and frequently co-morbid pathological correlates. Amyloid beta (Aβ) plays a major role in Alzheimer's-type age-related cognitive impairment, in addition to other etiopathologies such as Aβ-independent hyperphosphorylated tau, cerebrovascular disease, and myelin damage, which also warrant further investigation. Classical methods, even in the setting of the gold standard of postmortem brain assessment, involve semi-quantitative ordinal staging systems that often correlate poorly with clinical outcomes, due to imperfect cognitive measurements and preconceived notions regarding the neuropathologic features that should be chosen for study. Improved approaches are needed to identify histopathological changes correlated with cognition in an unbiased way. We used a weakly supervised multiple instance learning algorithm on whole slide images of human brain autopsy tissue sections from a group of elderly donors to predict the presence or absence of cognitive impairment (n = 367 with cognitive impairment, n = 349 without). Attention analysis allowed us to pinpoint the underlying subregional architecture and cellular features that the models used for prediction in both brain regions studied, the medial temporal lobe and frontal cortex. Despite noisy labels of cognition, our trained models were able to predict the presence of cognitive impairment with a modest accuracy that was significantly greater than chance. Attention-based interpretation studies of the features most associated with cognitive impairment in the top-performing models suggest that they identified myelin pallor in the white matter. Our results demonstrate a scalable platform with interpretable deep learning to identify unexpected aspects of pathology in cognitive impairment that can be translated to the study of other neurobiological disorders.
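To make the weakly supervised setup concrete, here is a minimal attention-based multiple instance learning head (in the spirit of standard attention MIL, not the authors' model): tile features from one slide are pooled by learned attention weights, which also provide the per-tile interpretation signal.

```python
# Minimal attention-MIL head: slide-level prediction from tile features,
# with attention weights usable for interpretation.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, tile_features):  # (n_tiles, feat_dim) for one slide
        weights = torch.softmax(self.attention(tile_features), dim=0)  # (n_tiles, 1)
        slide_embedding = (weights * tile_features).sum(dim=0)         # (feat_dim,)
        return self.classifier(slide_embedding), weights  # logits + per-tile attention

model = AttentionMIL()
logits, attn = model(torch.randn(1000, 512))  # attention highlights influential tiles
```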


Subject(s)
Cognitive Dysfunction, Deep Learning, Aged, Amyloid beta-Peptides/metabolism, Brain/pathology, Cognitive Dysfunction/pathology, Humans, Myelin Sheath/pathology
8.
J Invest Dermatol ; 142(1): 97-103, 2022 01.
Article in English | MEDLINE | ID: mdl-34265329

ABSTRACT

Basal cell carcinoma (BCC) is the most common skin cancer, with over 2 million cases diagnosed annually in the United States. Conventionally, BCC is diagnosed by naked eye examination and dermoscopy. Suspicious lesions are either removed or biopsied for histopathological confirmation, thus lowering the specificity of noninvasive BCC diagnosis. Recently, reflectance confocal microscopy, a noninvasive diagnostic technique that can image skin lesions at cellular-level resolution, has been shown to improve specificity in BCC diagnosis and to reduce the number needed to biopsy by 2-3 times. In this study, we developed and evaluated a deep learning-based artificial intelligence model to automatically detect BCC in reflectance confocal microscopy images. The proposed model achieved an area under the receiver operating characteristic curve of 89.7% (stack level) and 88.3% (lesion level), a performance on par with that of reflectance confocal microscopy experts. Furthermore, the model achieved an area under the curve of 86.1% on a held-out test set from international collaborators, demonstrating the reproducibility and generalizability of the proposed automated diagnostic approach. These results provide a clear indication that the clinical deployment of decision support systems for the detection of BCC in reflectance confocal microscopy images has the potential for optimizing the evaluation and diagnosis of patients with skin cancer.


Asunto(s)
Carcinoma Basocelular/diagnóstico , Aprendizaje Profundo/normas , Neoplasias Cutáneas/diagnóstico , Adulto , Anciano , Anciano de 80 o más Años , Inteligencia Artificial , Automatización , Biopsia , Dermoscopía/métodos , Femenino , Humanos , Masculino , Microscopía Confocal , Persona de Mediana Edad , Modelos Biológicos , Examen Físico , Reproducibilidad de los Resultados
9.
J Pathol Inform ; 12: 31, 2021.
Article in English | MEDLINE | ID: mdl-34760328

ABSTRACT

BACKGROUND: Web-based digital slide viewers for pathology commonly use OpenSlide and OpenSeadragon (OSD) to access, visualize, and navigate whole-slide images (WSI). Their standard settings represent WSI as deep zoom images (DZI), a generic image pyramid structure that differs from the proprietary pyramid structure in the WSI files. The transformation from WSI to DZI is an additional, time-consuming step when rendering digital slides in the viewer, and inefficiency of digital slide viewers is a major criticism of digital pathology. AIMS: To increase the efficiency of digital slide visualization by serving tiles directly from the native WSI pyramid, making the transformation from WSI to DZI obsolete. METHODS: We implemented a new flexible tile source for OSD that accepts arbitrary native pyramid structures instead of DZI levels. We measured its performance on a data set of 8104 WSI reviewed by 207 pathologists over 40 days in a web-based digital slide viewer used for routine diagnostics. RESULTS: The new FlexTileSource accelerates the display of a field of view in general by 67 ms, and by 117 ms if the block size of the WSI and the tile size of the viewer are increased to 1024 px. We freely provide the code of our open-source library at https://github.com/schuefflerlab/openseadragon. CONCLUSIONS: This is the first study to quantify visualization performance on a web-based slide viewer at scale, taking block size and tile size of digital slides into account. Quantifying performance will enable comparison and improvement of web-based viewers and thereby facilitate the adoption of digital pathology.
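The released FlexTileSource is JavaScript for OpenSeadragon; as a language-agnostic illustration of the underlying idea, serving tiles straight from the native WSI pyramid without DZI conversion, here is a hypothetical Python/Flask endpoint built on OpenSlide. The file path, tile size, and URL scheme are assumptions, not the published code.

```python
# Hypothetical tile endpoint that reads tiles directly from the native WSI pyramid
# with OpenSlide; an OSD tile source could request URLs of this form.
from io import BytesIO
from flask import Flask, send_file
import openslide

app = Flask(__name__)
slide = openslide.OpenSlide("example.svs")  # placeholder path
TILE = 1024  # larger block/tile sizes reduced display latency in the study

@app.route("/tile/<int:level>/<int:col>/<int:row>")
def tile(level, col, row):
    downsample = slide.level_downsamples[level]
    # read_region expects level-0 coordinates of the tile's top-left corner
    x0, y0 = int(col * TILE * downsample), int(row * TILE * downsample)
    region = slide.read_region((x0, y0), level, (TILE, TILE)).convert("RGB")
    buf = BytesIO()
    region.save(buf, "JPEG")
    buf.seek(0)
    return send_file(buf, mimetype="image/jpeg")
```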

10.
J Am Med Inform Assoc ; 28(9): 1874-1884, 2021 08 13.
Article in English | MEDLINE | ID: mdl-34260720

ABSTRACT

OBJECTIVE: Broad adoption of digital pathology (DP) is still lacking, and examples of DP connecting diagnostic, research, and educational use cases are missing. We blueprint a holistic DP solution at a large academic medical center that is ubiquitously integrated into clinical workflows; research applications, including molecular, genetic, and tissue databases; and educational processes. MATERIALS AND METHODS: We built a vendor-agnostic, integrated viewer for reviewing, annotating, sharing, and quality assurance of digital slides in a clinical or research context. It is the first homegrown viewer to receive New York State provisional approval, in 2020, for primary diagnosis and remote sign-out during the COVID-19 (coronavirus disease 2019) pandemic. We further introduce an interconnected Honest Broker for BioInformatics Technology (HoBBIT) to systematically compile and share large-scale DP research datasets, including anonymized images, redacted pathology reports, and clinical data of patients with consent. RESULTS: The solution has been in operational use for over 3 years by 926 pathologists and researchers evaluating 288 903 digital slides. Of these, 51% were reviewed within 1 month after scanning. Seamless integration of the viewer into 4 hospital systems clearly increases the adoption of DP. HoBBIT directly impacts the translation of knowledge in pathology into effective new health measures, including artificial intelligence-driven detection models for prostate cancer, basal cell carcinoma, and breast cancer metastases, developed and validated on thousands of cases. CONCLUSIONS: We highlight major challenges and lessons learned when going digital to provide orientation for other pathologists. Building interconnected solutions will not only increase adoption of DP, but also facilitate next-generation computational pathology at scale for enhanced cancer research.


Subject(s)
COVID-19, Medical Informatics/trends, Neoplasms, Pathology, Clinical, Academic Medical Centers, Artificial Intelligence, COVID-19/diagnosis, Humans, Male, Neoplasms/diagnosis, Pandemics, Pathology, Clinical/trends
11.
J Pathol Inform ; 12: 9, 2021.
Article in English | MEDLINE | ID: mdl-34012713

ABSTRACT

BACKGROUND: The development of artificial intelligence (AI) in pathology frequently relies on digitally annotated whole slide images (WSI). The creation of these annotations, manually drawn by pathologists in digital slide viewers, is time-consuming and expensive. At the same time, pathologists routinely annotate glass slides with a pen to outline cancerous regions, for example, for molecular assessment of the tissue. These pen annotations are currently considered artifacts and excluded from computational modeling. METHODS: We propose a novel method to segment and fill hand-drawn pen annotations and convert them into a digital format to make them accessible to computational models. Our method is implemented in Python as an open-source, publicly available software tool. RESULTS: Our method is able to extract pen annotations from WSI and save them as annotation masks. On a data set of 319 WSIs with pen markers, we validated our algorithm, which segments the annotations with an overall Dice score of 0.942, a precision of 0.955, and a recall of 0.943. Processing all images takes 15 minutes, in contrast to 5 hours of manual digital annotation time. Furthermore, the approach is robust against text annotations. CONCLUSIONS: We envision that our method can take advantage of already pen-annotated slides in scenarios in which the annotations would be helpful for training computational models. We conclude that, considering the large archives of many pathology departments that are currently being digitized, our method will help to collect large numbers of training samples from those data.
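A rough sketch (not the published tool) of the pen-extraction idea: threshold saturated ink colors on a WSI thumbnail, clean the mask morphologically, and fill the outlined contours to obtain a digital annotation mask. The HSV thresholds are assumed heuristics.

```python
# Heuristic pen-annotation extraction from a low-magnification WSI thumbnail.
import cv2
import numpy as np

def pen_annotation_mask(thumbnail_bgr):
    hsv = cv2.cvtColor(thumbnail_bgr, cv2.COLOR_BGR2HSV)
    # Assumed heuristic: pen ink is strongly saturated and darker than H&E tissue.
    ink = cv2.inRange(hsv, (0, 120, 30), (180, 255, 200))
    ink = cv2.morphologyEx(ink, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
    contours, _ = cv2.findContours(ink, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros(ink.shape, np.uint8)
    cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)  # fill outlines
    return mask

mask = pen_annotation_mask(cv2.imread("wsi_thumbnail.png"))  # placeholder file name
```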

12.
Mod Pathol ; 34(8): 1487-1494, 2021 08.
Article in English | MEDLINE | ID: mdl-33903728

ABSTRACT

The surgical margin status of breast lumpectomy specimens for invasive carcinoma and ductal carcinoma in situ (DCIS) guides clinical decisions, as positive margins are associated with higher rates of local recurrence. The "cavity shave" method of margin assessment has the benefits of allowing the surgeon to orient shaved margins intraoperatively and the pathologist to assess one inked margin per specimen. We studied whether a deep convolutional neural network, a deep multi-magnification network (DMMN), could accurately segment carcinoma from benign tissue in whole slide images (WSIs) of shave margin slides, and therefore serve as a potential screening tool to improve the efficiency of microscopic evaluation of these specimens. Applying the pretrained DMMN model, or the initial model, to a validation set of 408 WSIs (348 benign, 60 with carcinoma) achieved an area under the curve (AUC) of 0.941. After additional manual annotations and fine-tuning of the model, the updated model achieved an AUC of 0.968 with sensitivity set at 100% and corresponding specificity of 78%. We applied the initial model and updated model to a testing set of 427 WSIs (374 benign, 53 with carcinoma) which showed AUC values of 0.900 and 0.927, respectively. Using the pixel classification threshold selected from the validation set, the model achieved a sensitivity of 92% and specificity of 78%. The four false-negative classifications resulted from two small foci of DCIS (1 mm, 0.5 mm) and two foci of well-differentiated invasive carcinoma (3 mm, 1.5 mm). This proof-of-principle study demonstrates that a DMMN machine learning model can segment invasive carcinoma and DCIS in surgical margin specimens with high accuracy and has the potential to be used as a screening tool for pathologic assessment of these specimens.
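A short sketch of the screening-threshold idea: on a validation set, choose the highest WSI-level score that still catches every carcinoma slide (sensitivity fixed at 100%) and report the resulting specificity. Scores and labels below are placeholders, not the study's data.

```python
# Pick the operating threshold that preserves 100% sensitivity on validation data.
import numpy as np

def threshold_at_full_sensitivity(scores, labels):
    scores, labels = np.asarray(scores), np.asarray(labels)
    thr = scores[labels == 1].min()               # lowest score among carcinoma WSIs
    specificity = np.mean(scores[labels == 0] < thr)
    return thr, specificity

val_scores = np.array([0.05, 0.10, 0.65, 0.92, 0.88, 0.15])
val_labels = np.array([0,    0,    1,    1,    1,    0])   # 1 = carcinoma present
thr, spec = threshold_at_full_sensitivity(val_scores, val_labels)
print(f"threshold={thr:.2f}, specificity at 100% sensitivity={spec:.2f}")
```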


Subject(s)
Breast Neoplasms/pathology, Carcinoma, Ductal, Breast/pathology, Deep Learning, Image Interpretation, Computer-Assisted/methods, Margins of Excision, Carcinoma, Intraductal, Noninfiltrating/pathology, Female, Humans, Mastectomy, Segmental, Neoplasm, Residual/diagnosis
13.
J Pathol ; 254(2): 147-158, 2021 06.
Article in English | MEDLINE | ID: mdl-33904171

ABSTRACT

Artificial intelligence (AI)-based systems applied to histopathology whole-slide images have the potential to improve patient care through mitigation of challenges posed by diagnostic variability, histopathology caseload, and shortage of pathologists. We sought to define the performance of an AI-based automated prostate cancer detection system, Paige Prostate, when applied to independent real-world data. The algorithm was employed to classify slides into two categories: benign (no further review needed) or suspicious (additional histologic and/or immunohistochemical analysis required). We assessed the sensitivity, specificity, positive predictive values (PPVs), and negative predictive values (NPVs) of a local pathologist, two central pathologists, and Paige Prostate in the diagnosis of 600 transrectal ultrasound-guided prostate needle core biopsy regions ('part-specimens') from 100 consecutive patients, and ascertained the impact of Paige Prostate on diagnostic accuracy and efficiency. Paige Prostate displayed high sensitivity (0.99; CI 0.96-1.0), NPV (1.0; CI 0.98-1.0), and specificity (0.93; CI 0.90-0.96) at the part-specimen level. At the patient level, Paige Prostate displayed optimal sensitivity (1.0; CI 0.93-1.0) and NPV (1.0; CI 0.91-1.0) at a specificity of 0.78 (CI 0.64-0.89). The 27 part-specimens considered by Paige Prostate as suspicious, whose final diagnosis was benign, were found to comprise atrophy (n = 14), atrophy and apical prostate tissue (n = 1), apical/benign prostate tissue (n = 9), adenosis (n = 2), and post-atrophic hyperplasia (n = 1). Paige Prostate resulted in the identification of four additional patients whose diagnoses were upgraded from benign/suspicious to malignant. Additionally, this AI-based test provided an estimated 65.5% reduction of the diagnostic time for the material analyzed. Given its optimal sensitivity and NPV, Paige Prostate has the potential to be employed for the automated identification of patients whose histologic slides could forgo full histopathologic review. In addition to providing incremental improvements in diagnostic accuracy and efficiency, this AI-based system identified patients whose prostate cancers were not initially diagnosed by three experienced histopathologists. © 2021 The Authors. The Journal of Pathology published by John Wiley & Sons, Ltd. on behalf of The Pathological Society of Great Britain and Ireland.
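A hedged sketch of the two evaluation levels used above: metrics computed per part-specimen, and patient-level calls obtained by flagging a patient when any part is suspicious. The reads are placeholders, not study data.

```python
# Part-specimen-level and patient-level sensitivity/specificity/PPV/NPV.
import numpy as np

def metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1)); fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0)); fn = np.sum((y_true == 1) & (y_pred == 0))
    return dict(sens=tp / (tp + fn), spec=tn / (tn + fp),
                ppv=tp / (tp + fp), npv=tn / (tn + fn))

# Placeholder part-level calls grouped by patient id.
patient_ids = np.array([1, 1, 1, 2, 2, 3, 3, 3])
truth_part  = np.array([0, 1, 0, 0, 0, 1, 1, 0])
ai_part     = np.array([0, 1, 1, 0, 0, 1, 1, 0])

print("part level:", metrics(truth_part, ai_part))
patients = np.unique(patient_ids)
truth_pt = np.array([truth_part[patient_ids == p].max() for p in patients])
ai_pt    = np.array([ai_part[patient_ids == p].max() for p in patients])
print("patient level:", metrics(truth_pt, ai_pt))  # patient positive if any part is
```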


Asunto(s)
Inteligencia Artificial , Neoplasias de la Próstata/diagnóstico , Anciano , Anciano de 80 o más Años , Biopsia , Biopsia con Aguja Gruesa , Humanos , Aprendizaje Automático , Masculino , Persona de Mediana Edad , Patólogos , Próstata/patología , Neoplasias de la Próstata/patología
14.
Comput Med Imaging Graph ; 88: 101866, 2021 03.
Article in English | MEDLINE | ID: mdl-33485058

ABSTRACT

Pathologic analysis of surgical excision specimens for breast carcinoma is important to evaluate the completeness of surgical excision and has implications for future treatment. This analysis is performed manually by pathologists reviewing histologic slides prepared from formalin-fixed tissue. In this paper, we present a Deep Multi-Magnification Network, trained with partial annotations, for automated multi-class tissue segmentation using sets of patches from multiple magnifications in digitized whole slide images. Our proposed architecture with multi-encoder, multi-decoder, and multi-concatenation outperforms other single- and multi-magnification-based architectures by achieving the highest mean intersection-over-union, and can be used to facilitate pathologists' assessments of breast cancer.
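A toy sketch of the multi-encoder/multi-concatenation idea: one small encoder per magnification, feature maps concatenated before a shared decoder that outputs per-pixel class logits. It only illustrates the concept and is not the published DMMN architecture.

```python
# Toy multi-magnification segmentation network: one encoder per magnification,
# concatenated features, shared decoder producing per-pixel class logits.
import torch
import torch.nn as nn

def encoder():
    return nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())

class MultiMagSegNet(nn.Module):
    def __init__(self, n_classes=4, magnifications=3):
        super().__init__()
        self.encoders = nn.ModuleList([encoder() for _ in range(magnifications)])
        self.decoder = nn.Sequential(
            nn.Conv2d(32 * magnifications, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_classes, 1),
        )

    def forward(self, patches):  # list of (B, 3, H, W) tensors, one per magnification
        feats = [enc(p) for enc, p in zip(self.encoders, patches)]
        return self.decoder(torch.cat(feats, dim=1))

net = MultiMagSegNet()
logits = net([torch.randn(1, 3, 256, 256) for _ in range(3)])  # (1, 4, 256, 256)
```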


Subject(s)
Breast Neoplasms, Neural Networks, Computer, Breast, Breast Neoplasms/diagnostic imaging, Female, Humans
16.
Mod Pathol ; 33(11): 2169-2185, 2020 11.
Article in English | MEDLINE | ID: mdl-32467650

ABSTRACT

Pathologists are responsible for rapidly providing a diagnosis on critical health issues. Challenging cases benefit from additional opinions of pathologist colleagues. In addition to on-site colleagues, there is an active worldwide community of pathologists on social media for complementary opinions. Such access to pathologists worldwide has the capacity to improve diagnostic accuracy and generate broader consensus on next steps in patient care. From Twitter, we curate 13,626 images from 6,351 tweets posted by 25 pathologists in 13 countries. We supplement the Twitter data with 113,161 images from 1,074,484 PubMed articles. We develop machine learning and deep learning models to (i) accurately identify histopathology stains, (ii) discriminate between tissues, and (iii) differentiate disease states. The area under the receiver operating characteristic curve (AUROC) is 0.805-0.996 for these tasks. We repurpose the disease classifier to search for similar disease states given an image and clinical covariates. We report precision@k=1 of 0.7618 ± 0.0018 versus 0.397 ± 0.004 by chance (mean ± SD). The classifiers find that texture and tissue are important clinico-visual features of disease. Deep features trained only on natural images (e.g., cats and dogs) substantially improved search performance, while pathology-specific deep features and cell nuclei features further improved search to a lesser extent. We implement a social media bot (@pathobot on Twitter) to use the trained classifiers to aid pathologists in obtaining real-time feedback on challenging cases. If a social media post containing pathology text and images mentions the bot, the bot generates quantitative predictions of disease state (normal/artifact/infection/injury/nontumor, preneoplastic/benign/low-grade-malignant-potential, or malignant) and lists similar cases across social media and PubMed. Our project has become a globally distributed expert system that facilitates pathological diagnosis and brings expertise to underserved regions or hospitals with less expertise in a particular disease. This is the first pan-tissue pan-disease (i.e., from infection to malignancy) method for prediction and search on social media, and the first pathology study prospectively tested in public on social media. We will share data through http://pathobotology.org. We expect our project to cultivate a more connected world of physicians and improve patient care worldwide.
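As a hedged illustration of the similar-case search component, a nearest-neighbour lookup over deep image embeddings scored with precision@k; the embeddings and disease labels below are random placeholders, not the study's data.

```python
# Similar-case retrieval over deep image embeddings, scored with precision@k.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 512))      # deep features for 500 cases (placeholder)
disease_state = rng.integers(0, 3, size=500)  # nontumor / low-grade / malignant

nn_index = NearestNeighbors(n_neighbors=6, metric="cosine").fit(embeddings)
_, idx = nn_index.kneighbors(embeddings)      # idx[:, 0] is the query itself

k = 1
precision_at_k = np.mean(disease_state[idx[:, 1:k + 1]] == disease_state[:, None])
print(f"precision@{k} = {precision_at_k:.3f}")
```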


Subject(s)
Deep Learning, Pathology, Social Media, Algorithms, Humans, Pathologists
17.
Mod Pathol ; 33(10): 2058-2066, 2020 10.
Article in English | MEDLINE | ID: mdl-32393768

ABSTRACT

Prostate cancer (PrCa) is the second most common cancer among men in the United States. The gold standard for detecting PrCa is the examination of prostate needle core biopsies. Diagnosis can be challenging, especially for small, well-differentiated cancers. Recently, machine learning algorithms have been developed for detecting PrCa in whole slide images (WSIs) with high test accuracy. However, the impact of these artificial intelligence systems on pathologic diagnosis is not known. To address this, we investigated how pathologists interact with Paige Prostate Alpha, a state-of-the-art PrCa detection system, in WSIs of prostate needle core biopsies stained with hematoxylin and eosin. Three AP board-certified pathologists assessed 304 anonymized prostate needle core biopsy WSIs in 8 hours. The pathologists classified each WSI as benign or cancerous. After ~4 weeks, pathologists were tasked with re-reviewing each WSI with the aid of Paige Prostate Alpha. For each WSI, Paige Prostate Alpha was used to perform cancer detection and, for WSIs where cancer was detected, the system marked the area with the highest probability of cancer. The original diagnosis for each slide was rendered by genitourinary pathologists and incorporated any ancillary studies requested during the original diagnostic assessment. The pathologists and Paige Prostate Alpha were measured against this ground truth. Without Paige Prostate Alpha, pathologists had an average sensitivity of 74% and an average specificity of 97%. With Paige Prostate Alpha, the average sensitivity for pathologists significantly increased to 90% with no statistically significant change in specificity. With Paige Prostate Alpha, pathologists more often correctly classified smaller, lower-grade tumors, and spent less time analyzing each WSI. Future studies will investigate whether a similar benefit is obtained when such a system is used to detect other forms of cancer in a setting that more closely emulates real practice.


Subject(s)
Deep Learning, Diagnosis, Computer-Assisted/methods, Image Interpretation, Computer-Assisted/methods, Pathology, Clinical/methods, Prostatic Neoplasms/diagnosis, Biopsy, Large-Core Needle, Humans, Male
18.
J Subst Abuse Treat ; 108: 33-39, 2020 01.
Article in English | MEDLINE | ID: mdl-31358328

ABSTRACT

INTRODUCTION: The federal Opioid State Targeted Response (Opioid STR) grants provided funding to each state to ramp up the range of responses to reverse the ongoing opioid crisis in the U.S. Washington State used these funds to develop and implement an integrated care model to expand access to medication treatment and reduce unmet need for people with opioid use disorders (OUD), regardless of how they enter the treatment system. This paper examines the design, early implementation and results of the Washington State Hub and Spoke Model. METHODS: Descriptive data were gathered from key informants, document review, and aggregate data reported by hubs and spokes to Washington State's Opioid STR team. RESULTS: The Washington State Hub and Spoke Model reflects a flexible approach that incorporates primary care and substance use treatment programs, as well as outreach, referral and social service organizations, and a nurse care manager. Hubs could be any type of program that had the required expertise and capacity to lead their network in medication treatment for OUD, including all three FDA-approved medications. Six hub-spoke networks were funded, with 8 unique agencies on average, and multiple sites. About 150 prescribers are in these networks (25 on average). In the first 18 months, nearly 5000 people were inducted onto OUD medication treatment: 73% on buprenorphine, 19% on methadone, and 9% on naltrexone. CONCLUSIONS: The Washington State Hub and Spoke Model built on prior approaches to improve the delivery system for OUD medication treatment and support services, by increasing integration of care, ensuring "no wrong door," engaging with community agencies, and supporting providers who are offering medication treatment. It used essential elements from existing integrated care OUD treatment models, but allowed for organic restructuring to meet the population needs within a community. To date, there have been challenges and successes, but with this approach, Washington State has provided medication treatment for OUD to nearly 5000 people. Sustainability efforts are underway. In the face of the ongoing opioid crisis, it remains essential to develop, implement and evaluate novel models, such as Washington's Hub and Spoke approach, to improve treatment access and increase capacity.


Subject(s)
Buprenorphine/therapeutic use, Government Programs/economics, Health Services Accessibility/organization & administration, Narcotic Antagonists/therapeutic use, Opioid-Related Disorders/drug therapy, Primary Health Care/organization & administration, Government Programs/legislation & jurisprudence, Humans, Opiate Substitution Treatment, Referral and Consultation, State Government, Washington
19.
Nat Med ; 25(8): 1301-1309, 2019 08.
Article in English | MEDLINE | ID: mdl-31308507

ABSTRACT

The development of decision support systems for pathology and their deployment in clinical practice have been hindered by the need for large manually annotated datasets. To overcome this problem, we present a multiple instance learning-based deep learning system that uses only the reported diagnoses as labels for training, thereby avoiding expensive and time-consuming pixel-wise manual annotations. We evaluated this framework at scale on a dataset of 44,732 whole slide images from 15,187 patients without any form of data curation. Tests on prostate cancer, basal cell carcinoma and breast cancer metastases to axillary lymph nodes resulted in areas under the curve above 0.98 for all cancer types. Its clinical application would allow pathologists to exclude 65-75% of slides while retaining 100% sensitivity. Our results show that this system has the ability to train accurate classification models at unprecedented scale, laying the foundation for the deployment of computational decision support systems in clinical practice.
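A minimal sketch of the multiple instance learning idea above: a tile-level classifier trained with only the slide-level diagnosis, where the slide score is the maximum tile probability, so no pixel-wise annotation is needed. This is not the published system.

```python
# MIL with slide-level labels: slide score = max tile probability.
import torch
import torch.nn as nn

tile_classifier = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 1))

def slide_score(tile_features):              # (n_tiles, 512) features of one slide
    tile_probs = torch.sigmoid(tile_classifier(tile_features)).squeeze(1)
    return tile_probs.max()                  # positive slide <=> any positive tile

def mil_loss(tile_features, slide_label):    # slide_label: 0.0 or 1.0 from the diagnosis
    return nn.functional.binary_cross_entropy(slide_score(tile_features), slide_label)

loss = mil_loss(torch.randn(2000, 512), torch.tensor(1.0))
loss.backward()                              # gradients flow through the top-scoring tile
```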


Subject(s)
Breast Neoplasms/pathology, Carcinoma, Basal Cell/pathology, Deep Learning, Prostatic Neoplasms/pathology, Decision Support Systems, Clinical, Female, Humans, Male, Neoplasm Grading
20.
Med Image Anal ; 54: 253-262, 2019 05.
Article in English | MEDLINE | ID: mdl-30954852

ABSTRACT

The purpose of this research was to implement a deep learning network to overcome two of the major bottlenecks in improved image reconstruction for clinical positron emission tomography (PET). These are the lack of an automated means for the optimization of advanced image reconstruction algorithms, and the computational expense associated with these state-of-the-art methods. We thus present a novel end-to-end PET image reconstruction technique, called DeepPET, based on a deep convolutional encoder-decoder network, which takes PET sinogram data as input and directly and quickly outputs high-quality, quantitative PET images. Using simulated data derived from a whole-body digital phantom, we randomly sampled the configurable parameters to generate realistic images, which were each augmented to a total of more than 291,000 reference images. Realistic PET acquisitions of these images were simulated, resulting in noisy sinogram data, used for training, validation, and testing the DeepPET network. We demonstrated that DeepPET generates higher quality images compared to conventional techniques, in terms of relative root mean squared error (11%/53% lower than ordered subset expectation maximization (OSEM)/filtered back-projection (FBP)), structural similarity index (1%/11% higher than OSEM/FBP), and peak signal-to-noise ratio (1.1/3.8 dB higher than OSEM/FBP). In addition, we show that DeepPET reconstructs images 108 and 3 times faster than OSEM and FBP, respectively. Finally, DeepPET was successfully applied to real clinical data. This study shows that an end-to-end encoder-decoder network can produce high quality PET images at a fraction of the time compared to conventional methods.
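A toy encoder-decoder in the spirit of DeepPET, mapping a sinogram tensor directly to a reconstructed image and trained against reference images; layer sizes and input shapes are illustrative assumptions, not the published architecture.

```python
# Toy sinogram-to-image encoder-decoder trained with a pixel-wise loss.
import torch
import torch.nn as nn

class SinogramToImage(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, sinogram):             # (B, 1, H, W) noisy sinogram
        return self.decoder(self.encoder(sinogram))

model = SinogramToImage()
reconstruction = model(torch.randn(2, 1, 128, 192))          # placeholder batch
loss = nn.functional.mse_loss(reconstruction, torch.randn(2, 1, 128, 192))
```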


Subject(s)
Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Positron-Emission Tomography, Deep Learning, Humans, Image Enhancement/methods